After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well on unseen data. We thus get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer to why the model predicted a particular label for a single instance, or which features were most influential for that instance. The only methods currently able to provide such explanations are decision trees. This paper proposes a procedure which, based on a set of assumptions, allows the decisions of any classification method to be explained.
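To make the idea concrete, here is a minimal, hypothetical sketch of instance-level explanation for an arbitrary classifier: it perturbs one feature at a time and records how much the predicted class probability drops. The function name, the perturbation scheme, and all parameters are illustrative assumptions, not the procedure proposed in the paper.

```python
# A hypothetical, model-agnostic per-instance explanation: perturb each
# feature and measure the change in the predicted class probability.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def feature_influence(model, x, n_samples=100, scale=0.5, rng=None):
    """Estimate how much each feature of instance x drives the prediction."""
    rng = rng or np.random.default_rng(0)
    base = model.predict_proba(x[None, :])[0]
    label = base.argmax()
    influence = np.zeros(len(x))
    for j in range(len(x)):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] += rng.normal(0.0, scale, size=n_samples)
        # Influence = average drop in the predicted class probability
        # when only feature j is perturbed.
        influence[j] = base[label] - model.predict_proba(perturbed)[:, label].mean()
    return influence

print(feature_influence(model, X[0]))
```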
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks the community faces in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for it (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
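As a small illustration of the most commonly reported workaround for oversized samples, patch-based training, here is a sketch of random patch extraction. The patch size, count, and function names are assumptions for illustration, not details from the survey.

```python
# Illustrative patch-based handling of a large image: sample fixed-size
# crops instead of feeding the full image at once.
import numpy as np

def sample_patches(image, patch_size=128, n_patches=8, rng=None):
    """Sample random square patches from an (H, W, C) image."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        top = rng.integers(0, h - patch_size + 1)
        left = rng.integers(0, w - patch_size + 1)
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

image = np.zeros((1024, 1024, 3), dtype=np.float32)  # stand-in for a large scan
print(sample_patches(image).shape)  # (8, 128, 128, 3)
```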
We revisit a simple Learning-from-Scratch baseline for visuo-motor control that uses data augmentation and a shallow ConvNet. We find that this baseline performs competitively with recent methods that leverage frozen visual representations trained on large-scale vision datasets.
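A hedged sketch of what such a baseline might look like: a shallow convolutional encoder combined with a pad-and-crop random-shift augmentation, a common choice in visuo-motor RL. The layer sizes, feature dimension, and shift amount are illustrative assumptions rather than the paper's exact architecture.

```python
# A shallow ConvNet encoder plus random-shift augmentation (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowEncoder(nn.Module):
    def __init__(self, in_channels=3, feature_dim=50):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
        )
        self.fc = nn.LazyLinear(feature_dim)

    def forward(self, obs):
        x = self.convs(obs)
        return self.fc(x.flatten(1))

def random_shift(obs, pad=4):
    """Pad-and-crop augmentation: shift each image by up to `pad` pixels."""
    n, c, h, w = obs.shape
    padded = F.pad(obs, (pad,) * 4, mode="replicate")
    top = torch.randint(0, 2 * pad + 1, (1,)).item()
    left = torch.randint(0, 2 * pad + 1, (1,)).item()
    return padded[:, :, top:top + h, left:left + w]

obs = torch.rand(8, 3, 84, 84)
features = ShallowEncoder()(random_shift(obs))
print(features.shape)  # torch.Size([8, 50])
```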
Poor sample efficiency remains the primary challenge for deploying deep reinforcement learning (RL) algorithms in real-world applications, in particular for visuo-motor control. Model-based RL has the potential to be highly sample-efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. In practice, however, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful at completing sparse-reward tasks than prior approaches in the low-data regime (100K interaction steps, 5 demonstrations). Code and videos are available at: https://nicklashansen.github.io/modemrl
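Of the three ingredients, oversampling of demonstration data is the easiest to illustrate. Below is a minimal sketch of a replay buffer that draws a fixed fraction of each training batch from the demonstration set; the class name, mixing ratio, and buffer layout are assumptions, not the paper's exact recipe.

```python
# A replay buffer that oversamples demonstrations at a fixed ratio.
import random

class MixedReplayBuffer:
    """Samples a fixed fraction of each batch from the demonstration set."""

    def __init__(self, demos, demo_fraction=0.25):
        self.demos = list(demos)   # transitions from demonstrations
        self.interactions = []     # transitions from online rollouts
        self.demo_fraction = demo_fraction

    def add(self, transition):
        self.interactions.append(transition)

    def sample(self, batch_size):
        n_demo = int(batch_size * self.demo_fraction)
        batch = random.choices(self.demos, k=n_demo)
        batch += random.choices(self.interactions, k=batch_size - n_demo)
        random.shuffle(batch)
        return batch

buffer = MixedReplayBuffer(demos=[("s", "a", "r")] * 5)
buffer.add(("s2", "a2", "r2"))
print(len(buffer.sample(8)))  # 8
```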
Graph Neural Networks (GNNs) are deep learning models designed to process attributed graphs. GNNs can compute cluster assignments that account for both the vertex features and the graph topology. Existing GNNs for clustering are trained by optimizing an unsupervised minimum-cut objective, which is approximated by a Spectral Clustering (SC) relaxation. SC offers a closed-form solution that, however, is not particularly useful for a GNN trained with gradient descent. Additionally, the SC relaxation is loose and yields overly smooth cluster assignments that do not separate the samples well. We propose a GNN model that optimizes a tighter relaxation of the minimum cut based on graph total variation (GTV). Our model has two core components: i) a message-passing layer that minimizes the $\ell_1$ distance between the features of adjacent vertices, which is key to achieving sharp cluster transitions; ii) a loss function that minimizes the GTV of the cluster assignments while ensuring balanced partitions. By optimizing the proposed loss, our model can be self-trained to perform clustering. In addition, our clustering procedure can be used to implement graph pooling in deep GNN architectures for graph classification. Experiments show that our model outperforms other GNN-based approaches to clustering and graph pooling.
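The structure of such an objective can be sketched as follows: an $\ell_1$ total-variation term over edges that encourages sharp, piecewise-constant assignments, plus a simple balance term. The exact loss and balancing scheme in the paper may differ; this is only illustrative.

```python
# A GTV-style clustering loss: l1 differences across edges plus a
# penalty for unevenly sized clusters (illustrative, not the paper's loss).
import torch

def gtv_loss(assignments, edge_index, balance_weight=1.0):
    """assignments: (N, K) soft cluster memberships (rows sum to 1).
    edge_index: (2, E) pairs of adjacent vertex indices."""
    src, dst = edge_index
    # Total variation: l1 difference across every edge encourages sharp,
    # piecewise-constant assignments.
    tv = (assignments[src] - assignments[dst]).abs().sum()
    # Balance term: penalize deviation from equally sized clusters.
    sizes = assignments.sum(dim=0)
    balance = ((sizes - sizes.mean()) ** 2).sum()
    return tv + balance_weight * balance

S = torch.softmax(torch.randn(6, 2), dim=-1)
edges = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
print(gtv_loss(S, edges))
```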
Realizing when a model is right for the wrong reason is not trivial and requires significant effort from model developers. In some cases, an input salience method, which highlights the most important parts of the input, may reveal problematic reasoning. But scrutinizing highlights over many data instances is tedious and often infeasible. Furthermore, analyzing examples in isolation does not reveal general patterns in the data or in the model's behavior. In this paper we aim to address these issues and move from understanding single examples to understanding entire datasets and models. The methodology we propose is based on aggregated salience maps. Using it, we address multiple distinct but common model-developer needs by showing how problematic data and model behavior can be identified -- a necessary first step for improving the model.
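A minimal sketch of the aggregation idea, with assumed data structures: averaging per-token salience over an entire dataset so that recurring high-salience tokens, possibly spurious ones such as function words, become visible.

```python
# Aggregate per-example salience maps across a dataset (illustrative).
from collections import defaultdict

def aggregate_salience(examples):
    """examples: iterable of (tokens, saliences) pairs for one model.
    Returns the mean salience per token across the dataset."""
    totals, counts = defaultdict(float), defaultdict(int)
    for tokens, saliences in examples:
        for tok, s in zip(tokens, saliences):
            totals[tok] += s
            counts[tok] += 1
    return {tok: totals[tok] / counts[tok] for tok in totals}

data = [
    (["the", "movie", "was", "great"], [0.05, 0.2, 0.05, 0.7]),
    (["the", "plot", "was", "great"], [0.05, 0.15, 0.1, 0.7]),
]
# If a function word like "the" ranked highly here, it would flag
# potentially problematic model behavior worth inspecting.
print(sorted(aggregate_salience(data).items(), key=lambda kv: -kv[1]))
```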
We discuss a platform comprising both software and hardware components whose purpose is to support research into characterizing and mitigating the sim-to-real gap in robotics and vehicle autonomy engineering. The software is operating-system independent and has three main components: a simulation engine called Chrono, which supports high-fidelity vehicle and sensor simulation; an autonomy stack for algorithm design and testing; and a development environment that supports visualization and hardware-in-the-loop experimentation. The accompanying hardware platform is a 1/6th-scale vehicle augmented with reconfigurable mountings for computing, sensing, and tracking. Since this vehicle has a digital twin within the simulation environment, one can test the same autonomy perception, state estimation, or control algorithms, as well as the processors they run on, in both simulation and reality. A demonstration is provided to show the use of this platform for autonomy research. Future work will concentrate on augmenting ART/ATK with support for a full-sized Chevy Bolt EUV, which will be made available to this group in the near future.
We present in this paper what we believe to be one of the first attempts at video game machine translation. Our study shows that models trained with only limited in-domain data surpass publicly available systems, and a subsequent human evaluation reveals interesting findings in the resulting translations. The first part of the paper introduces some of the challenges of video game translation, some of the existing literature, and the systems and datasets used in this experiment. The final section discusses our analysis of the resulting translations and the potential benefits of such an automated system. One such finding highlights the model's ability to learn rules and patterns typical of English-to-French video game translation. Our conclusion therefore suggests that, given the encouraging results, the highly repetitive nature of the work, and the often poor working conditions of translators in this sector, the specific case of video game machine translation could prove very useful. However, as with other use cases of MT in the cultural sector, we believe this depends heavily on a proper implementation of the tool, which should be used interactively by human translators to stimulate creativity rather than for raw post-editing in the name of productivity.
Multi-objective optimization (MOO) aims to optimize multiple conflicting objectives simultaneously and has found important applications in machine learning, such as minimizing both the classification loss and the disparity in how different demographic groups are treated, in the interest of fairness. At optimality, further improving one objective must harm at least one other objective, and decision makers need to comprehensively explore the multiple optima (known as the Pareto front) to pinpoint one final solution. We address the efficiency of finding the Pareto front. First, finding the front from scratch with stochastic multi-gradient descent (SMGD) is expensive for large neural networks and datasets. We propose a predictor-corrector method that explores the Pareto front as a manifold starting from a few initial optima. Second, at each exploration step, the predictor solves a large-scale linear system that scales quadratically in the number of model parameters and requires one backpropagation to evaluate each second-order Hessian-vector product for the solver. We propose a Gauss-Newton approximation that scales only linearly and requires only first-order inner products per iteration. This also allows choosing between the minimal-residual (MINRES) and conjugate gradient methods when approximately solving the linear system. These innovations make predictor-corrector methods feasible for large networks. Experiments on a multi-objective (fairness and accuracy) misinformation detection task show that 1) the predictor-corrector method can find Pareto fronts better than or comparable to those of SMGD in less time, and 2) the proposed first-order method does not degrade the quality of the Pareto front identified by the second-order method while further reducing the running time.
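The second-order primitive at issue can be sketched with double backpropagation; the paper's first-order Gauss-Newton approximation is not reproduced here. Below is a generic Hessian-vector product in PyTorch, with illustrative names and a toy loss.

```python
# Hessian-vector product H @ v via double backpropagation: one extra
# backward pass per product, which is what the Gauss-Newton
# approximation described above seeks to avoid.
import torch

def hessian_vector_product(loss, params, vec):
    """H @ vec, where H is the Hessian of `loss` w.r.t. `params`."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.autograd.grad(flat @ vec, params)

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum() * w.sum()  # toy non-quadratic loss
v = torch.ones(3)
print(hessian_vector_product(loss, [w], v))
```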
Automatically evaluating the coherence of summaries is of great significance, both for enabling cost-efficient summary evaluation and for improving coherence by selecting high-scoring candidate summaries. Although many different approaches have been proposed to model summary coherence, they are typically evaluated on different datasets and with different metrics. This makes it difficult to understand their relative performance and to identify promising directions toward better summary coherence modeling. In this work, we conduct a large-scale study of various methods for summary coherence modeling on a level playing field. In addition, we introduce two novel analysis measures, intra-system correlation and bias matrices, which help identify biases in coherence measures and provide robustness against system-level confounders. While none of the currently available automatic coherence measures can assign reliable coherence scores to system summaries across all evaluation metrics, large-scale language models fine-tuned on self-supervised tasks show promising results, as long as the fine-tuning takes into account that they need to generalize across different summary lengths.
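A hedged sketch of what an intra-system correlation might compute, under the assumption that it correlates metric scores with human scores separately within each summarization system, so that system-level confounders cannot inflate the apparent agreement; the paper's exact definition may differ.

```python
# Correlate a coherence metric with human judgments within each system,
# then average the per-system correlations (illustrative definition).
from collections import defaultdict
from statistics import correlation  # Pearson correlation, Python 3.10+

def intra_system_correlation(rows):
    """rows: (system_id, metric_score, human_score) triples.
    Returns the mean correlation computed within each system."""
    by_system = defaultdict(list)
    for system, metric, human in rows:
        by_system[system].append((metric, human))
    corrs = []
    for pairs in by_system.values():
        metric_scores, human_scores = zip(*pairs)
        corrs.append(correlation(metric_scores, human_scores))
    return sum(corrs) / len(corrs)

rows = [("sysA", 0.1, 0.2), ("sysA", 0.4, 0.5), ("sysA", 0.9, 0.8),
        ("sysB", 0.2, 0.1), ("sysB", 0.5, 0.6), ("sysB", 0.7, 0.9)]
print(intra_system_correlation(rows))
```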